respond to more run status states and other GPT assistant housekeeping changes #665
Conversation
sidhujag commented Nov 13, 2023 (edited)
- According to the spec there are other states we can handle in the wait_for_run function, so I added those (a rough sketch follows this list).
- Added a termination msg param; kwargs are passed through to super().
- register_reply now registers invoke_assistant and check_termination_and_human_reply in order, so we can check for exit/human reply when human_input_mode != "NEVER". Removed the hardcoded human_input_mode.
- Return an empty array if the while loop (the state machine) terminates for some reason without returning messages.
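For reference, a minimal sketch of what handling the additional run states might look like, assuming the OpenAI Assistants API run statuses; `wait_for_run` here is a simplified stand-in for the PR's function, not its actual body:

```python
import time

# Terminal statuses listed in the Assistants API spec; the dispatch is illustrative.
TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(client, thread_id, run_id, poll_interval=1.0):
    """Poll a run until it reaches a state the caller must handle."""
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in TERMINAL_STATUSES:
            return run  # caller maps failed/cancelled/expired to a reply message
        if run.status == "requires_action":
            return run  # caller submits tool outputs, then resumes polling
        time.sleep(poll_interval)  # still "queued" or "in_progress"
```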
I received:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "1 validation error for Request\nbody -> role\n value is not a valid enumeration member; permitted: 'user' (type=type_error.enum; enum_values=[<RoleParam.USER: 'user'>])", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
when using message["role"], which is "assistant" for sent messages, but the API accepts only the user role for new messages added to a thread. Not sure how it works for you without this change? A sketch of the workaround follows.
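A minimal sketch of that workaround, assuming the OpenAI Python client; `pending_messages` and `thread_id` are illustrative names for the forwarded chat history and the target thread:

```python
# Coerce every forwarded message to the "user" role, since the thread-message
# endpoint rejected "assistant" with the 400 error quoted above.
for message in pending_messages:
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=message["content"],
    )
```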
…ole for end state msgs
Can you check again after the latest changes?
LGTM
Could you please add a test to cover the new code?
@sidhujag Hello, I would like to know if you have any updates.
Yes, I'm working on my own code quite a bit to have it functional, but I can add coverage as soon as I get time if someone else doesn't get to it before me.
@sonichi @IANTHEREAL @sidhujag, here is an example test using mocks (`skip_test` and `config_list` come from the test module's setup):

```python
import sys
from unittest.mock import Mock

import pytest

from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent


@pytest.mark.skipif(
    sys.platform in ["darwin", "win32"] or skip_test,
    reason="do not run on MacOS or Windows or dependency is not installed",
)
def test_process_messages_failed():
    # Create a mock assistant thread
    mock_thread = Mock()
    mock_thread.id = "thread1"
    # Create a mock run with status 'failed' and a last error
    mock_run = Mock()
    mock_run.status = "failed"
    mock_run.last_error = "Error message"
    instance = GPTAssistantAgent(
        "assistant",
        llm_config={
            "config_list": config_list,
        },
    )
    # Call _process_messages and assert it surfaces the error as the reply
    response = instance._process_messages(mock_thread, mock_run)
    instance.delete_assistant()
    assert response == {"role": "assistant", "content": "Error message"}
```
I believe for UT (unit testing) purposes we should: 1) ensure that for each of the different states, the logic is processed as expected; 2) for each state's return value, ensure that a human agent or other agents can identify it and take the anticipated next action. Both can be expedited using mocks (see the sketch below). For all the GPT Assistant response states that mocks can't cover, we rely on reviewers to ensure accuracy, or on future E2E (end-to-end) testing.
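Building on the test above, a hedged sketch of how mocks could cover the other terminal states in one parameterized test; the statuses and expected contents are assumptions, and `make_agent` is a hypothetical helper that builds a `GPTAssistantAgent`:

```python
import pytest
from unittest.mock import Mock


@pytest.mark.parametrize(
    "status, last_error, expected_content",
    [
        ("failed", "Error message", "Error message"),
        ("expired", None, "Expired"),
        ("cancelled", None, "Cancelled"),
    ],
)
def test_process_messages_terminal_states(status, last_error, expected_content):
    # Mock the thread and a run that ended in the given terminal state
    mock_thread = Mock()
    mock_thread.id = "thread1"
    mock_run = Mock()
    mock_run.status = status
    mock_run.last_error = last_error
    instance = make_agent()  # hypothetical helper returning a GPTAssistantAgent
    try:
        response = instance._process_messages(mock_thread, mock_run)
    finally:
        instance.delete_assistant()
    assert response == {"role": "assistant", "content": expected_content}
```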
```python
self.cancellation_requested = False
```
Is there a usage example for this argument?
It was assumed an external object may want to cancel because the run is looping in the state machine; this flag is a way to try to cancel before the run finishes, wasting tokens or making other calls that are not intended. A rough usage sketch follows.
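A hedged sketch of that usage, assuming the flag is polled inside the run-wait loop; `start_watchdog` and the 30-second budget are illustrative, not part of the PR:

```python
import threading
import time


def start_watchdog(agent, budget_seconds=30):
    """Externally request cancellation once a time budget is spent."""

    def _expire():
        time.sleep(budget_seconds)
        agent.cancellation_requested = True  # the agent's wait loop checks this flag

    threading.Thread(target=_expire, daemon=True).start()

# The agent's wait loop would then check the flag on each poll (illustrative):
#     if self.cancellation_requested:
#         client.beta.threads.runs.cancel(thread_id=thread.id, run_id=run.id)
#         break
```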
OK. Implementing a timeout mechanism for the GPT Assistant Agent would be even better; a sketch of that idea follows.
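A sketch of that timeout, assuming a simple polling loop over the OpenAI client; the function name and the 60-second budget are illustrative:

```python
import time


def wait_for_run_with_timeout(client, thread_id, run_id, timeout=60.0, poll_interval=1.0):
    """Poll a run, cancelling it if it does not settle within the budget."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status not in ("queued", "in_progress"):
            return run
        time.sleep(poll_interval)
    # On timeout, request cancellation instead of leaving the run spinning.
    return client.beta.threads.runs.cancel(thread_id=thread_id, run_id=run_id)
```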
```diff
- def _get_run_response(self, thread, run):
+ def _process_messages(self, assistant_thread, run):
```
Annotate the arguments: what types of object are `thread` and `run`?
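For example, the annotations might look like this, assuming the types the OpenAI beta client returns; the exact import paths vary across client versions, so treat these as an assumption:

```python
from openai.types.beta import Thread
from openai.types.beta.threads import Run


def _process_messages(self, assistant_thread: Thread, run: Run) -> dict:
    """Map a finished run on the given thread to a chat message dict."""
    ...
```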
"role": "assistant", | ||
"content": 'Expired', | ||
} | ||
return new_messages |
`new_messages` is not defined here.
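One possible shape of the fix (a sketch only; the actual change is in the PR linked below) is to return the terminal-state message directly rather than appending to a list that was never initialized:

```python
# Instead of building an undefined new_messages list, return the
# message for this terminal state directly from this branch.
if run.status == "expired":
    return {
        "role": "assistant",
        "content": "Expired",
    }
```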
Fixed this in this PR: https://github.com/syscoin/autogen/pull/1. @sidhujag @IANTHEREAL, please take a look and add it to your PR; I don't have write access to your repo.
@sidhujag Do you plan to continue working on this PR?
Continued in #899